Commands won't write to file when script is executed by cron - bash

I used crontab -e to schedule the execution of a shell script that makes ssh calls to a list of servers, gathers information, and prints it to a file. The output of crontab -l is:
SHELL = /bin/sh
PATH = /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh
The script logs the output of echo "Beginning remote connections..." >> $logfile to the file as expected, but it does not log anything from the following loop:
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
Pastebin of the full script: http://pastebin.com/3vD7Bba0
Additional note: this script pushes the latest version of a management script, then ssh's into the remote server to execute it and capture the output. This works 100% of the time when run manually. Any assistance would be helpful, thanks!

You need to write SHELL=/bin/sh, and the same for PATH. The spaces around = are wrong.
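With those fixes, the top of the crontab would look like this (the same entries shown above, just without the spaces):
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh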
Also, use full paths for any files your script references, since cron will not run it from your usual working directory:
From
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
to
while read servers
do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done < /path/to/hostnames.txt
^^^^^^^^^
Note the usage of while read; do ... done < file instead of the unnecessary for host in $(cat ...).
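The same full-path advice applies to $logfile: under cron the working directory is usually your home directory, so a relative log path ends up somewhere unexpected. A slightly hardened sketch of the loop (the log path here is an assumed example, not taken from the question):
logfile=/Users/cjones/Documents/Development/Scripts/DailyStatus.log
while IFS= read -r servers
do
echo "Starting connection to $servers" >> "$logfile"
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@"$servers":~/checkup.sh > /dev/null
echo ""
ssh -t "$servers" "sudo ./checkup.sh") >> "$logfile"
echo ""
done < /Users/cjones/Documents/Development/Scripts/hostnames.txt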

Related

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
echo "Not started"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
if [ $NUM -lt 1 ]
then
echo "Not working"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
ps ax | grep $(cat $NPID) | grep -v grep
echo "All Ok"
fi
fi
websrv gets JSON from the user and runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Usage of the full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv & it works as well.
However, if this whole chain of tools is started by cron on boot, work.sh is not able to perform any command except cd, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces an exit 1 status from the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log that might explain the particular errors.
Try adding the path from the session that runs the script correctly to the cron entry (or inside the script)
Get the current path (in a session where the script runs fine) with echo $PATH, then add it to the crontab, replacing the placeholder below with that output -> <REPLACE_WITH_OUTPUT_FROM_ABOVE>
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't always use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
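Alternatively, PATH can be set near the top of run.sh itself so it no longer depends on the cron entry (a sketch; which directories you need depends on where ssh-add, git, etc. live on your system):
#!/usr/bin/env bash
# make the script self-contained with respect to PATH when launched by cron
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"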

Output not showing all echo commands

I'm using a bash script which is run on serverA and connects to serverB to run a file.
The result is saved in a variable and then echoed. However, it doesn't echo all of the data.
The script on serverA is running:
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
echo "Count: $count"
This echoes 341, not Count: 341.
The count.sh script on serverB is looping through some folders and doing a count of files.
E.g.
total=0
count=$(ls -l | wc -l | xargs)
if [ "$count" > 0 ]; then
total=$(( total + count ))
fi
echo "$total"
How do I display the full echo on serverA?
You are attempting to run ./count.sh on the local machine, not the remote host. The && is a command separator that terminates the sshpass command. Use quotes to ensure your desired shell command is passed to the remote host.
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')
I don't see any way of producing the reported output, unless count.sh can run locally but something (are you using set -e?) prevents the following echo from executing at all.
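To spell out what the quoting changes (a minimal sketch restating the commands above):
# unquoted: the local shell interprets the &&, so ssh only sends `cd /home/tom` to serverB and ./count.sh is looked up on serverA
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
# quoted: the entire `cd /home/tom && ./count.sh` line runs on serverB and its output lands in $count
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')
echo "Count: $count"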

Synology task scheduler shell script not performing scp copy

I am trying to create a task on our Synology NAS that runs every night, fetches backups from a remote server, and copies them to the NAS.
This is my shell script:
#!/bin/sh
WD=$(/bin/dirname $0)
Log=$WD/sync-error.log
/bin/echo "-------" >> $Log
/bin/echo "$(date +%Y-%m-%d--%R): " >> $Log
/bin/echo "Clearing all backups that are older than 60 days." >> $Log
/bin/echo "Starting calenso backup" >> $Log
/bin/echo "Making sure that directory '$(date +%Y-%m-%d)' exists" >> $Log
mkdir -p /volume1/home/$(date +%Y%m%d)
/bin/echo "Start copying files from nine.ch" >> $Log
/bin/scp -r www-data@xxx.xxx.com:/home/www-data/backup/$(date +%Y%m%d)/* /volume1/home/$(date +%Y%m%d)
/bin/echo "SCP is done" >> $Log
/bin/echo "Backup is done" >> $Log
/bin/echo "-------" >> $Log
When I run the script manually in the Terminal, it works perfectly. However, when it's started via the Task Scheduler, the script runs and outputs data, but the scp command is ignored.
Any help is appreciated!
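A first diagnostic step, in line with the environment issues above, is to log scp's own output and exit status instead of only the surrounding echo lines; the Task Scheduler runs with a different HOME and PATH than your Terminal session, so the key or host lookup that works interactively may fail silently. A sketch (the -i key path is a hypothetical example):
/bin/scp -i /volume1/homes/admin/.ssh/id_rsa -r www-data@xxx.xxx.com:/home/www-data/backup/$(date +%Y%m%d)/* /volume1/home/$(date +%Y%m%d) >> $Log 2>&1
/bin/echo "scp exit status: $?" >> $Log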

CMD does not run if used after ENTRYPOINT

I have the following docker file
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file as this
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
Entry point gets called correctly but CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile
but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single docker container image which will start the kafka-connect server (ENTRYPOINT) and then, via a bash file (CMD), configure the plugins. The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD is not run after ENTRYPOINT; it is appended to ENTRYPOINT as its arguments, like parameters after a function invocation, on the same command line.
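For example (a minimal illustration, not taken from the Dockerfile above):
ENTRYPOINT ["/bin/echo"]
CMD ["hello"]
# the container's main process becomes: /bin/echo hello
So in your Dockerfile, /usr/share/kafka-connect-script/plugins-config.sh is merely passed as an argument to ./etc/confluent/docker/run, not executed as a second command.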
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in background not to get stuck in here
/usr/share/kafka-connect-script/plugins-config.sh # apply configuration
sleep 100000000 # keep the startup script from exiting, since that would stop the container
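The Dockerfile then needs to copy that script and point ENTRYPOINT at it instead of using CMD (a sketch of only the changed lines; it assumes startup_script.sh sits next to the Dockerfile):
COPY startup_script.sh /usr/share/kafka-connect-script/startup_script.sh
RUN chmod +x /usr/share/kafka-connect-script/startup_script.sh
ENTRYPOINT ["/usr/share/kafka-connect-script/startup_script.sh"]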

SCP loop stops executing after some time

So I have these two versions of the same script. Both attempt to copy my profile to all the servers on my infra (about 5k). The problem I am having is that no matter which version I use, the process always gets stuck somewhere around 300 servers. It does not matter if I do it sequentially or in parallel; both versions fail, and both at a random server. I don't get any error message (yes, I know I'm redirecting error messages to null for now); it simply stops executing after reaching a random point close to 300 servers and just lingers there doing nothing.
The best run I could get did it for about 357 servers.
Probably there is some detail I'm not aware of that is causing this. Could someone advise?
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log
done <<< "$( cat all_servers.txt )"
echo "$(date) - Process completed!!"
Parallel
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log &
done <<< "$( cat all_servers.txt )"
wait
echo "$(date) - Process completed!!"
Let's start with better input parsing. Instead of feeding a bash herestring built from a POSIX command substitution into the while read loop, I've got the loop reading your server list directly via input redirection (this assumes one server per line in that file; I can fix this if that's not the case). If the contents of all_servers.txt were too long for a command line, you'd experience an error and/or premature termination.
I've also removed extraneous ./ items and I assume that rouser's home directory on each server is in fact /home/rouser (scp defaults to the home directory if given a relative path or no path at all).
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
done < all_servers.txt
echo "$(date) - Process completed!!"
Parallel
For the Parallel solution, I've enclosed your conditional in parentheses so the & backgrounds the whole scp-and-log sequence rather than just the last command in the list.
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
(
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
) &
done < all_servers.txt
wait
echo "$(date) - Process completed!!"
SSH keys
I highly recommend learning more about SSH. The scp -B flag was unknown to me because I'm used to using SSH keys and ssh-agent, which will make such connectivity seamless (use passwordless keys if you're running this in a cron job).
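A minimal key-based setup would look something like this (a sketch; the key type, paths, and host are illustrative, and ssh-copy-id still needs the password once per server):
# generate a passwordless key pair once on the source machine
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# install the public key on each server (can be looped over all_servers.txt)
ssh-copy-id -i ~/.ssh/id_ed25519.pub rouser@somehost
# after that, scp no longer prompts and -B is unnecessary
scp -o "StrictHostKeyChecking no" .bash_profile rouser@somehost: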
