I am trying to get a cloud server (built from an image I have saved) to execute a script from a URL upon startup, but the script is not executing properly.
I used one of the answers from Execute bash script from URL to configure a curl script, and am executing that script via the @reboot directive in crontab (Ubuntu 14.04). My setup looks like this:
The script contains these commands:
user@cloud-server-01:~$ cat startup.sh
#! /bin/sh
/usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt | bash /dev/stdin
I call the script via crontab:
user@cloud-server-01:~$ crontab -l
@reboot /home/user/startup.sh > startup.log 2>&1 &
If I manually execute the script from the command line using exactly the same command, it works fine. However, executing by crontab on startup, it seems to hang, and I see the following processes running:
user@cloud-server-01:~$ ps ux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1287 0.0 0.1 4444 632 ? S 19:17 0:00 /bin/sh /home/user/startup.sh
user 1290 0.0 0.7 89536 3536 ? S 19:17 0:00 /usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt
user 1291 0.0 0.2 12632 1196 ? S 19:17 0:00 bash /dev/stdin
Am I missing something obvious in why the cron execution isn't giving me the same results as my command line?
EDIT:
Thanks, Olof, for redirecting my troubleshooting. In fact, curl is executing, and if I wait long enough (several minutes) it operates as desired. I suspect the problem is that the network interface and/or URL is not available when curl is first called, and while curl may retry the connection, it probably backs off its retry interval. So the question now becomes: how do I check whether I have a connection to this URL before calling curl?
This is not a bash problem; your curl command is still running so bash is still running, waiting for curl to close the pipe that the bash shell is reading from.
To troubleshoot your curl invocation I would run it first without piping to bash to check that I get the output I expected.
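For example, something along these lines (the output path is just illustrative) fetches the script to a file so you can inspect it before re-adding the pipe:
/usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt -o /tmp/startup-check.txt
cat /tmp/startup-check.txt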
The hint in Olof's answer got me there, but I'm posting the full result here for completeness:
Because of a cloud provider's script that takes 20-40 seconds to run after reboot, my desired connection IP wasn't available when cron first executed my script. curl would either time out or connect only after a significant delay. I have modified my connection script to poll the connection until it is available before calling curl:
#! /bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOST_IP=192.168.100.59
check_online() {
    IS_ONLINE=$(netcat -z -w 5 "$HOST_IP" 80 && echo 1 || echo 0)
}
# Initial check to see if we're online
check_online
# Loop while we're not online.
while [ "$IS_ONLINE" -eq 0 ]; do
    # We're offline. Sleep for a bit, then check again
    sleep 5
    check_online
done
# Run remote script
bash <(curl -s http://${HOST_IP}/user/startup.sh.txt)
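If you would rather not depend on netcat, the same polling can be done with curl itself. A minimal sketch, assuming the same URL as above; -f makes curl treat HTTP errors as failures, so the loop also waits out the provider's startup window:
until curl -sf -o /dev/null --max-time 5 "http://${HOST_IP}/user/startup.sh.txt"; do
    sleep 5
done
bash <(curl -s "http://${HOST_IP}/user/startup.sh.txt")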
Related
I'm setting up a cron job that runs a bash script containing the following:
#!/bin/bash
NUM_CONTAINERS=$(docker ps -q | wc -l)
if [ "$NUM_CONTAINERS" -lt 40 ]
then
    echo "Time: $(date). Restart containers."
    cd /opt
    pwd
    sudo docker kill $(docker ps -q)
    docker-compose up -d
    echo "Completed."
else
    echo "Nothing to do"
fi
The output is appended to a log file:
>> cron.log
However, the output in cron.log only shows:
Time: Sun Aug 15 10:50:01 UTC 2021. Restart containers.
/opt
Completed.
Neither command seems to execute, as I don't see any change in my containers either.
These two commands work fine in a standalone .sh script without the condition, though.
What am I doing wrong?
The user running the cron job has sudo privileges, and we can see the second echo printing.
Lots of times, things that work outside of cron don't work within cron because the environment is not set up in the same way.
You should generally capture standard output and standard error to see if something is going wrong.
For example, use >> cron.log 2>&1 in your crontab file; this will capture both.
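A hypothetical crontab entry with both streams captured (the schedule and paths are illustrative, not taken from your setup):
0 * * * * /opt/restart_containers.sh >> /home/user/cron.log 2>&1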
There's at least the possibility that docker is not in your path or, even if it is, the docker commands are not working for some other reason (that you're not seeing since you only capture standard output).
Capturing standard error should help out with that, if it is indeed the issue.
As an aside, I tend to use full path names inside cron scripts, or set up very limited environments at the start to ensure everything works correctly (once I've established why it's not working correctly).
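For instance, here is a minimal sketch of a script header that pins down the environment; the docker path is an assumption, so verify it with command -v docker on your system:
#!/bin/bash
# Explicit PATH so cron sees the same commands as an interactive shell.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Absolute path to docker; adjust to match your installation.
DOCKER=/usr/bin/docker
NUM_CONTAINERS=$("$DOCKER" ps -q | wc -l)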
I'd like the terminal to return to normal after the bash script has been executed.
#! /bin/bash
echo -ne "\x01\x02\x00\x00\x00\x06\x01\x05\x00\x00\x00\x00" | nc 192.168.0.119 502 > /home/pi/mb.txt
exit 0
Currently, the script runs as expected and the output goes to its destination, but the terminal then hangs after running ./script, waiting for me to hit CTRL-C. I'd like the terminal to return to normal right after the script has run.
nc -N 192.168.0.119 502
From the man page:
-N
shutdown(2) the network socket after EOF on the input. Some servers require this to finish their work.
Note that this may not be available in some versions.
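If your nc build lacks -N (the OpenBSD variant has it; traditional netcat builds may not), the -q option is a common fallback. A sketch assuming a build that supports -q, where -q 1 tells nc to exit one second after EOF on stdin:
echo -ne "\x01\x02\x00\x00\x00\x06\x01\x05\x00\x00\x00\x00" | nc -q 1 192.168.0.119 502 > /home/pi/mb.txt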
I am logging into a remote server using an SSH client. I have written a script that executes two commands on the server, but the first command runs a bash script that invokes the bash command at the end, and as a result only one command executes, not the other.
I cannot edit the first script to comment out or remove the bash call.
I have written the following script:
abc.sh
#!/bin/bash
command1="sudo -u user_abc -H /abc/xyz/start_shell.sh"
command2="./try1.sh"
$command1 && $command2
Only command 1 is getting executed, not the second: the bash call at the end of start_shell.sh creates a new interactive shell, and until that shell exits the second command never runs.
Solution 1
Since you can execute start_shell.sh you must have read permissions. Therefore, you could copy the script, modify it such that it doesn't call bash anymore, and execute the modified version.
I think this would be the best solution. If you really really really have to use start_shell.sh as is, then you could try one of the following solutions.
Solution 2
Try closing stdin using <&-. An interactive bash session will exit immediately if there is no stdin.
sudo -u user_abc -H /abc/xyz/start_shell.sh <&-; ./try1.sh
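If anything inside start_shell.sh legitimately tries to read from stdin, redirecting from /dev/null is a gentler variant: reads see EOF rather than a closed descriptor. A sketch, not tested against your start_shell.sh:
sudo -u user_abc -H /abc/xyz/start_shell.sh < /dev/null; ./try1.sh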
Solution 3
Change the order if both commands are independent.
./try1.sh; sudo -u user_abc -H /abc/xyz/start_shell.sh
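For completeness, here is abc.sh with Solution 2 folded in. This is a sketch; it assumes the trailing bash in start_shell.sh exits cleanly once its stdin is closed:
#!/bin/bash
command1="sudo -u user_abc -H /abc/xyz/start_shell.sh"
command2="./try1.sh"
# Close stdin for the first command so its trailing interactive bash exits immediately.
$command1 <&- && $command2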
I am starting an FTAM server (ft820.rc on CentOS 5) using bash 3.0, and I am having an issue starting it from a script. Namely, in the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I run the following directly on the machine defined by $ip
/etc/init.d/ft820.rc start
I will get the prompt back just after the service is started.
This is the code for start in ft820.rc
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
mask=`umask`
umask 0000
# startup the lock manager
${BINPATH}/lockmgr -u 16
# update attribute database
${BINPATH}/fua ${CONFIGFILE} > /dev/null
# clear concurrency locks
${BINPATH}/finit -cy ${CONFIGFILE} >/dev/null
# startup filestore
${BINPATH}/ffs ${CONFIGFILE}
if [ $? = 0 ]
then
echo Vertel FT-820 Filestore running.
else
echo Error detected while starting Vertel FT-820 Filestore.
fi
umask $mask
I repost here (at the request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... in the command line? i.e., can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public_key that you previously inserted in the destination root@$ip:~/.ssh/authorized_keys file? – Olivier Dulac 20 hours ago"
"you say that, at the command line (and NOT in the script) you can ssh root@... and it works without asking for your pwd? (i.e., it can then be run from a script?) – Olivier Dulac 20 hours ago"
"try the ssh without the '-n' and even without -nq at all: ssh root@$ip /etc/init.d/ft820.rc start (you could even add ssh -v, which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago"
"also: before the "ssh..." line in the script, make another line with, for example: ssh root@$ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help make sure the ssh part is working. The "set" part will also show you the running shell (e.g. if it contains BASH=, you're running bash. Otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago"
"please try without the '-n' (which redirects stdin from /dev/null). If it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago"
Apparently what worked was to add the -t option to the ssh command (you can go up to '-t -t -t' to further force it to try to allocate a tty, depending on the situation).
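Applied to the command from the question, that looks like this (add more -t flags if one is not enough):
ssh -t root@$ip /etc/init.d/ft820.rc start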
I guess it's because the invoked command expected to be run within an interactive session, and so needed a tty as its stdout.
A possibility (but just a wild guess): the invoked rc script does output information, but in a buffered environment (i.e., when not launched via your terminal) the calling script never saw enough lines to fill the buffer and trigger any printing. It's like running grep something | something_else in a buffered environment and hitting Ctrl+C before the buffer fills: you end up thinking grep found no lines, when a few lines may already have been sitting in the buffer. There is a lot to be said about buffering, and I am only beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was writing to a live terminal session, which may have turned off the buffering and allowed the result to show. Maybe it worked in the first case too, but you could never see the output.
In my crontab file I execute a script like so (I edit the crontab using sudo crontab -e):
01 * * * * bash /etc/m/start.sh
The script runs some other scripts like so:
sudo bash -c "/etc/m/abc.sh --option=1" &
sleep 2
sudo bash -c "/etc/m/abc.sh --option=2" &
When cron runs the script start.sh, I do ps aux | grep abc.sh and I see the abc.sh script running.
After a couple of seconds, the script is no longer running, even though abc.sh should take hours to finish.
If I do sudo bash /etc/m/start.sh & from the command line, everything works fine (the abc.sh scripts run for hours in the background until they complete).
How do I debug this?
Is there something I'm doing that is preventing these scripts from running in the background until they are done?
The program(s) you're starting might be expecting a terminal to send their output to, or to receive input from.
If you set the MAILTO= variable, and you have a sendmail(-like) daemon installed, you will get an email with the error message(s) it prints, if there are any:
MAILTO=your@email.address.here.com
01 * * * * bash /path/to/something.sh
Another way to debug would be to run the script from the command line, while redirecting all inputs and outputs:
$ sudo bash -c "foo.sh" > output_file 2>&1 < /dev/null
Also, the system log files (usually found in /var/log) might contain useful hints.
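Cron itself also logs each job it launches, so it is worth confirming the job actually fired. The log location varies by distribution; on Debian/Ubuntu-style systems something like this works:
grep CRON /var/log/syslog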