Synology task scheduler shell script not performing scp copy

I am trying to create a task on our Synology NAS that runs every night, fetches backups from a remote server, and copies them to the NAS.
This is my shell script:
#!/bin/sh
WD=$(/bin/dirname $0)
Log=$WD/sync-error.log
/bin/echo "-------" >> $Log
/bin/echo "$(date +%Y-%m-%d--%R): " >> $Log
/bin/echo "Clearing all backups that are older than 60 days." >> $Log
/bin/echo "Starting calenso backup" >> $Log
/bin/echo "Making sure that directory '$(date +%Y-%m-%d)' exists" >> $Log
mkdir -p /volume1/home/$(date +%Y%m%d)
/bin/echo "Start copying files from nine.ch" >> $Log
/bin/scp -r www-data@xxx.xxx.com:/home/www-data/backup/$(date +%Y%m%d)/* /volume1/home/$(date +%Y%m%d)
/bin/echo "SCP is done" >> $Log
/bin/echo "Backup is done" >> $Log
/bin/echo "-------" >> $Log
When I run the script manually in the terminal, it works perfectly. However, when it is started via the Task Scheduler, the script runs and outputs data, but the scp command is ignored.
Any help is appreciated!
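One way to see why scp misbehaves only under the scheduler is to stop discarding its output: the scheduler typically runs tasks as a different user (often root), whose SSH keys and known_hosts differ from the account the script was tested with, and the script never logs scp's errors. A minimal diagnostic sketch, reusing the script's own $Log:

/bin/scp -v -r www-data@xxx.xxx.com:/home/www-data/backup/$(date +%Y%m%d)/* /volume1/home/$(date +%Y%m%d) >> $Log 2>&1

The -v flag makes scp report each connection step, and 2>&1 sends its error messages into the log instead of discarding them.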

Related

CMD does not run if used after ENTRYPOINT

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e "$(date) Kafka Connect listener HTTP state: $(eval $curl_command) (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The entry point gets called correctly, but CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile, but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single Docker container image which starts the kafka-connect server (ENTRYPOINT) and then configures the plugins via a bash file (CMD). The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD is not executed after ENTRYPOINT; its value is appended to ENTRYPOINT as arguments, like parameters after a function invocation, forming a single command line.
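For example, with exec-form instructions the two arrays are simply concatenated; a minimal illustration:

ENTRYPOINT ["echo", "hello"]
CMD ["world"]
# the container executes: echo hello world

So in the Dockerfile above, /usr/share/kafka-connect-script/plugins-config.sh is passed as an argument to ./etc/confluent/docker/run rather than executed as a second command.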
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in background not to get stuck in here
/usr/share/kafka-connect-script/plugins-config.sh # apply configuration
sleep 100000000 # keep the startup script alive; if it exited, the container would stop
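To wire this up, the Dockerfile's final lines would replace the separate ENTRYPOINT/CMD pair with the single script; a sketch, assuming startup_script.sh sits next to the Dockerfile:

COPY startup_script.sh /usr/share/kafka-connect-script/startup_script.sh
RUN chmod +x /usr/share/kafka-connect-script/startup_script.sh
ENTRYPOINT ["/usr/share/kafka-connect-script/startup_script.sh"]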

Need to print current CPU usage and Memory usage in file continuously

I have prepared the script below, but it's not adding any data to the output file.
My intention is to get the current CPU usage and memory usage and print them to a log file.
What is wrong with my script? I will run this script file on a CentOS machine.
#!/usr/bin/bash
HOSTNAME=$(hostname)
mkdir -p /root/scripts
LOGFILE=/root/scripts/xcpuusagehistory.log
touch $LOGFILE
a=0;
b=1;
while [ "$a" -lt "$b" ]
do
CPULOAD=`top -d10 | grep "Cpu(s)"`
echo "$CPULOAD on Host $HOSTNAME" >> $LOGFILE
done
log_file=/root/scripts/xcpuusagehistory.log
while true
do
cpu_load="$(top -b -n1 -d10 | grep "Cpu(s)")"
echo "$cpu_load on Host $HOSTNAME" >> "$log_file"
sleep 1
done
See top batch mode (-b) in the man page.
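The question also asks for memory usage, which free(1) covers. A minimal sketch combining both readings, assuming the same log path as the original script:

#!/usr/bin/bash
log_file=/root/scripts/xcpuusagehistory.log
while true
do
    cpu_load="$(top -b -n1 | grep "Cpu(s)")"
    mem_usage="$(free -m | grep Mem)"
    echo "$cpu_load | $mem_usage on Host $(hostname)" >> "$log_file"
    sleep 60
done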

SCP loop stops executing after some time

So I have these two versions of the same script. Both attempt to copy my profile to all the servers on my infra (about 5k). The problem I am having is that no matter which version I use, the process always gets stuck somewhere around 300 servers. It does not matter whether I run it sequentially or in parallel: both versions fail, both at a random server. I don't get any error message (yes, I know I'm redirecting error messages to null for now); it simply stops executing after reaching a random point close to 300 servers and just lingers there doing nothing.
The best run I could get did it for about 357 servers.
There is probably some detail I'm unaware of that is causing this. Could someone advise?
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log
done <<< "$( cat all_servers.txt )"
echo "$(date) - Process completed!!"
Parallel
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log &
done <<< "$( cat all_servers.txt )"
wait
echo "$(date) - Process completed!!"
Let's start with better input parsing. Instead of parsing a bash herestring from a POSIX command substitution via a while read loop, I've got the while read loop reading your server list directly via redirection (this assumes one server per line in that file; I can fix this if that's not the case). If the contents of all_servers.txt were too long for a command line, you'd experience an error and/or premature termination.
I've also removed extraneous ./ items, and I assume that rouser's home directory on each server is in fact /home/rouser (scp defaults to the home directory if given a relative path or no path at all).
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
done < all_servers.txt
echo "$(date) - Process completed!!"
Parallel
For the Parallel solution, I've enclosed your conditional in parentheses just in case the pipeline was backgrounding the wrong process.
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
(
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
) &
done < all_servers.txt
wait
echo "$(date) - Process completed!!"
SSH keys
I highly recommend learning more about SSH. The scp -B (batch mode) flag was unknown to me because I'm used to using SSH keys and ssh-agent, which make such connectivity seamless (use passwordless keys if you're running this in a cron job).
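A minimal key setup, assuming ssh-copy-id is available on the client (the hostname below is a placeholder):

# generate a passwordless key once on the client
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# install the public key on each server
ssh-copy-id -i ~/.ssh/id_ed25519.pub rouser@server.example.com

Once the key is installed everywhere, scp and ssh connect without prompting for a password.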

Commands won't write to file when script is executed by cron

I used crontab -e to schedule the execution of a shell script that makes ssh calls to a list of servers, gathers information, and prints it to a file. The output of crontab -l is:
SHELL = /bin/sh
PATH = /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh
The script I am running does log the output of echo "Beginning remote connections..." >> $logfile; however, it does not log to a file the output of the following loop:
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
Pastebin of the full script: http://pastebin.com/3vD7Bba0
As an additional note, this script pushes the latest version of a management script and then ssh's into the remote server to execute it and capture the output. This works 100% of the time when run manually. Any assistance would be helpful, thanks!
You need to write SHELL=/bin/sh, and the same for PATH. The spaces around = are wrong.
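So the crontab from the question becomes:

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh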
Also, use full paths when calling files in your script when you call it with crontab:
From
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
to
while read servers
do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done < /path/to/hostnames.txt
^^^^^^^^^
Note the usage of while read; do ... done < file instead of the unnecessary for host in $(cat ...).

run inotifywait on background

I have this code, copied from linuxaria.com as an example, and it works just fine in my case. The problem is that when I exit the terminal, inotifywait stops. I want it to run in the background even after exiting the terminal. How can I do that?
#!/bin/sh
# CONFIGURATION
DIR="/tmp"
EVENTS="create"
FIFO="/tmp/inotify2.fifo"
on_event() {
local date=$1
local time=$2
local file=$3
sleep 5
echo "$date $time Fichier créé: $file"
}
# MAIN
if [ ! -e "$FIFO" ]
then
mkfifo "$FIFO"
fi
inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!
while read date time file
do
on_event $date $time $file &
done < "$FIFO"
You can run the script with screen or nohup but I'm not sure how that would help since the script does not appear to log its output to any file.
nohup bash script.sh </dev/null >/dev/null 2>&1 &
Or
screen -dm bash script.sh </dev/null >/dev/null 2>&1 &
Disown could also apply:
bash script.sh </dev/null >/dev/null 2>&1 & disown
You should just test which one keeps the command from being suspended or hung up when the terminal exits.
If you want to log the output to a file, you can try these versions:
nohup bash script.sh </dev/null >/path/to/logfile 2>&1 &
screen -dm bash script.sh </dev/null >/path/to/logfile 2>&1 &
bash script.sh </dev/null >/path/to/logfile 2>&1 & disown
I made a 'service' out of it, so I could stop/start it like a normal service, and it would also start after a reboot.
This was made on a CentOS distro, so I'm not sure whether it works on others right away.
Create a file with execute rights in the service directory:
/etc/init.d/servicename
#!/bin/bash
# chkconfig: 2345 90 60
case "$1" in
start)
nohup SCRIPT.SH > /dev/null 2>&1 &
echo $!>/var/run/SCRIPT.SH.pid
;;
stop)
pkill -P `cat /var/run/SCRIPT.SH.pid`
rm /var/run/SCRIPT.SH.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/SCRIPT.SH.pid ]; then
echo SCRIPT.SH is running, pid=`cat /var/run/SCRIPT.SH.pid`
else
echo SCRIPT.SH is not running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
Everything in caps should be changed to your script's name.
The line # chkconfig: 2345 90 60 makes it possible to start the service when the system is rebooted. This probably doesn't work on Ubuntu-like distros.
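For completeness, registering and starting the init script on CentOS would look like this (servicename stands in for whatever you called the file):

chmod +x /etc/init.d/servicename
chkconfig --add servicename
chkconfig servicename on
service servicename start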
The best way I found is to create a systemd service.
Create systemd file in /lib/systemd/system/checkfile.service:
sudo vim /lib/systemd/system/checkfile.service
And paste this there:
[Unit]
Description=Run inotifywait in background
[Service]
User=ubuntu
Group=ubuntu
ExecStart=/bin/bash /path_to/script.sh
# RestartSec only takes effect together with a Restart= policy
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
and in /path_to/script.sh, you can have this:
inotifywait -m /path-to-dir -e create -e moved_to |
while read dir action file; do
echo "The file '$file' appeared in directory '$dir' via '$action'" >> /dir/event.txt
done
Make sure that your file is executable by the user:
sudo chmod +x /path_to/script.sh
After creating two files, reload systemd manager configuration with:
sudo systemctl daemon-reload
Now you can use start/stop/enable to your script:
sudo systemctl enable checkfile
sudo systemctl start checkfile
Make sure to replace file/directory/user/group values before executing.
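To check that the unit is actually running and to follow its output, the standard systemd tooling applies:

sudo systemctl status checkfile
journalctl -u checkfile -f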
Replace -m with
-d -o /dev/null
i.e.:
inotifywait -d -o /dev/null -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" & INOTIFY_PID=$!
You can check the inotifywait help manual at:
https://helpmanual.io/help/inotifywait/
A method that will work even if the file to be watched does not exist yet, or gets deleted in between: watch the whole directory instead of a single file, and then act only on the particular file:
nohup inotifywait -m -e close_write /var/opt/some_directory/ |
while read -r directory events filename; do
if [ "$filename" = "file_to_be_watched.log" ]; then
# do your stuff here; I'm just printing the events to file
echo "$events" >> /tmp/events.log
fi
done &
