cron not able to run the commands in shell - shell

I am trying to run the following cron job from bash (RHEL 7.4). It is an entry-level Postgres DB backup script I wrote:
#!/bin/bash
# find latest file
echo $PATH
cd /home/postgres/log/
echo "------------ backup starts-----------"
latest_file=$( ls -t | head -n 1 | grep '\.log$' )
echo "latest file"
echo $latest_file
# find older files than above
echo "old file"
old_file=$( find . -maxdepth 1 -name "postgresql*" ! -newer $latest_file -mmin +1 )
if [ -f "$old_file" ]
then
    echo $old_file
    file_name=${old_file##*/}
    echo "file name"
    echo $file_name
    # zip older file
    tar czvf /home/postgres/log/archived_logs/$old_file.gz /home/postgres/log/$file_name
    rm -rf /home/postgres/log/$file_name
else
    echo "no old file found"
fi
The above runs correctly from the shell, performs the intended tasks, and echoes the needed info.
I have installed it as the postgres user (not root) with crontab -e:
*/2 * * * * /home/postgres/log/rollup.sh >> /home/postgres/log/logfile.csv 2>&1
It is correctly echoing the text I embedded for testing into the .csv, but not the output of the commands. That is not my main concern, though; my concern is that it is not running those few commands at all.
I gave it another try by changing the log file (.csv) path to /dev/null, and the commands in the shell script do execute. I do not get what I am missing here.
The .csv file has 777 permissions, just for testing.

Related

shell script runs manually, but fails on crontab

I have written a shell script to copy a file from a remote server and convert it to another format. After conversion, I edit the file using the sed command. The script runs successfully when executed manually but fails when executed through crontab.
Crontab entry is:
*/1 * * * * /script/testshell.sh
Below is the shell script code:
#!/bin/bash
file="/script/test_data.csv"
if [ -f "$file" ]
then
    echo " file is present in the local machine "
else
    echo " file is not present in the local machine "
    echo " checking the file is present in the remote server "
    ssh user@IP 'if [ -f /$path ]; then echo File found ; else echo File not found; fi'
fi
if [ -f "$file" ]
then
    rm -f test_data.csv
fi
scp -i /server.pem user@IP:/$path
file="/script/test_data.csv"
if [ -f "$file" ]
then
    echo "$file found."
else
    echo "$file not found."
fi
if [ -f "$file" ]
then
    echo " converting csv to json format ....."
fi
./csvjson.sh input.csv output.json
sed -e '/^$/d' -e 's/.*/&,/; 1 i\[' ./output.json | sed ' $ a \]' > hello.json
When run manually, the script works perfectly, but it does not work from crontab.
What exactly doesn't work from cron? What's the output, and are there any errors from cron?
A few things to try:
cron doesn't run your profile by default, so if your script needs anything set from it, include it in the crontab command, e.g.
". ./.bash_profile;/script/testshell.sh"
I can't see $path set anywhere, although it's used to test whether a file exists. Are you setting it manually somewhere, so it won't be set from cron?
Some of your scripts and files are specified as being in the current dir (./); from cron that will be your home folder. Is that right, or do you need to change directory in the script or use a full path for them?
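For example, a cron-safe invocation might look like this (just a sketch, assuming your .bash_profile exports whatever $path and other settings the script needs, and that everything lives under /script):
*/1 * * * * . $HOME/.bash_profile; cd /script && /script/testshell.sh >> /script/testshell.log 2>&1
Alternatively, set the needed variables and cd at the top of testshell.sh itself, so the script does not depend on the interactive environment at all.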
Hope that helps

Monitor and ftp newly-added files on Linux -- modify existing code

The OS is CentOS 7. I have a small application to implement the functionality below:
1. Read information from config.ini like this:
# Configuration file for ftpxml service
# Remote FTP server information
ftpadress=1.2.3.4
username=test
password=test
# Local folders configuration
# folderA: folder for incoming files
folderA=/u02/dump
# folderB: Successfully transferred files are copied here
folderB=/u02/dump_bak
# retrydir: when ftp upload fails, store failed files in this
# directory
retrydir=/u02/dump_retry
2. Monitor folder A. If there are any newly-added files in A, do step 3.
3. FTP these new files to the remote FTP server in the order of their creation time. When an upload finishes, copy the uploaded file to folder B and delete the corresponding file from folder A.
4. If the FTP transfer fails, store the relevant files in retrydir and try to FTP them again later.
5. Record every operation in a log file.
Detailed setup instructions for the application:
Install the ncftp package: yum install ncftp -y. It is not a service or a daemon, just a client tool which is invoked in the bash script for FTP transfers.
Customize these files to suit your setup using vi: config.ini, ftpmon.path and ftpmon.service.
Copy ftpmon.path and ftpmon.service to /etc/systemd/system/, copy config.ini and ftpxml.sh to /u02/ftpxml/, and run: chmod +x ftpxml.sh
Start the monitoring tool
sudo systemctl start ftpmon.path
If you want to enable it at boot time just enter: sudo systemctl enable ftpmon.path
Set up a cron task to purge queued files (add the -p option):
*/5 * * * * /u02/ftpxml/ftpxml.sh -p
Now the application seems to work well, except in one special situation:
When we put several files into folder A continuously, for instance 1.txt, 2.txt and 3.txt one after another within a short time, we usually find that 1.txt is transferred fine, but the subsequent files fail to transfer and remain in folder A.
Now I am trying to fix this problem. I suppose the error may be due to the following: while the first file is being transferred, the second file may already have been created in folder A, so the code never notices the second file.
Below is the code of ftpxml.sh:
#!/bin/bash
# ! Please read the README.txt file !
# Copy files from upload dir to remote FTP then save them
# to folderB

# look up our location
SCRIPT=$(readlink -f $0)
# Absolute path to this script
SCRIPTPATH=`dirname $SCRIPT`
PIDFILE=${SCRIPTPATH}/ftpmon_prog.lock

# load config.ini
if [ -f $SCRIPTPATH/config.ini ]; then
    source $SCRIPTPATH/config.ini
else
    echo "No config found. Exiting"
    exit 1
fi

# Lock to avoid multiple instances
if [ -f $PIDFILE ]; then
    kill -0 $(cat $PIDFILE) 2> /dev/null
    if [ $? == 0 ]; then
        exit
    fi
fi

# Store PID in lock file
echo $$ > $PIDFILE

# Parse cmdline arguments
while getopts ":ph" opt; do
    case $opt in
        p)
            # we set the purge mode (cron mode)
            purge_only=1
            ;;
        \?|h)
            echo "Help text"
            exit 1
            ;;
    esac
done

# declare useful functions
# common logging function
function logmsg() {
    LOGFILE=ftp_upload_`date +%Y%m%d`.log
    echo $(date +%m-%d-%Y\ %H:%M:%S) $* >> $SCRIPTPATH/log/${LOGFILE}
}

# Upload to remote FTP
# we use ncftpput to batch silently
# $1: file to upload, $2: return value placeholder
function upload() {
    ncftpput -V -u $username -p $password $ftpadress /prog/ $1
    return $?
}

function purge_retry() {
    failed_files=$(ls -1 -rt ${retrydir}/*)
    if [ -z "$failed_files" ]; then
        return
    fi
    while read line
    do
        #upload ${retrydir}/$line
        upload $line
        if [ $? != 0 ]; then
            # upload failed, we exit
            exit
        fi
        logmsg File $line Uploaded from ${retrydir}
        mv $line $folderB
        logmsg File $line Copied from ${retrydir}
    done <<< "$failed_files"
}

# first check the 'queue' (retry) directory
purge_retry

# if called from a cron task we are done
if [ $purge_only ]; then
    rm -f $PIDFILE
    exit 0
fi

# look in the incoming dir
new_files=$(ls -1 -rt ${folderA}/*)
while read line
do
    # launch upload
    if [ Z$line == 'Z' ]; then
        break
    fi
    #upload ${folderA}/$line
    upload $line
    if [ $? == 0 ]; then
        logmsg File $line Uploaded from ${folderA}
    else
        # upload failed, we cp to the retry folder
        echo upload failed
        cp $line $retrydir
    fi
    # whether the upload succeeded or failed, we ALWAYS move the file to folderB
    mv $line $folderB
    logmsg File $line Copied from ${folderA}
done <<< "$new_files"

# clean exit
rm -f $PIDFILE
exit 0
Below is the content of ftpmon.path:
[Unit]
Description= Triggers the service that logs changes.
Documentation= man:systemd.path
[Path]
# Enter the path to monitor (/u02/dump)
PathModified=/u02/dump/
[Install]
WantedBy=multi-user.target
Below is the content of ftpmon.service:
[Unit]
Description= Starts File Upload monitoring
Documentation= man:systemd.service
[Service]
Type=oneshot
#Set here the user that ftpxml.sh will run as
User=root
#Set the exact path to the script
ExecStart=/u02/ftpxml/ftpxml.sh
[Install]
WantedBy=default.target
Thanks in advance; I hope some experts can give me suggestions.
Since you remove successfully transferred files from A, you can leave files with transfer errors in A. So I am dealing only with files in one folder.
List your files by modification time (the closest available stand-in for creation time) with
find . -maxdepth 1 -type f -print0 | xargs -r0 stat -c %y\ %n | sort
if you want hidden files to be included, or, if not,
find . -maxdepth 1 -type f ! -name '.*' -print0 | xargs -r0 stat -c %y\ %n | sort
You'll get something like
2016-02-19 18:53:41.000000000 ./.dockerenv
2016-02-19 18:53:41.000000000 ./.dockerinit
2016-02-19 18:56:09.000000000 ./versions.txt
2016-02-19 19:01:44.000000000 ./test.sh
Now cut off the timestamps to keep only the filenames (or use xargs -r0 stat -c %n if it does not matter that the files are ordered by name instead of by timestamp), and then, for each file:
do the transfer
check the success
move successfully transferred files to B
A sketch of such a per-file loop is shown below.
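A minimal sketch of that loop (upload_one is a hypothetical placeholder for the ncftpput call; folderA, folderB and retrydir come from config.ini as in your script):
#!/bin/bash
source /u02/ftpxml/config.ini        # provides folderA, folderB, retrydir and FTP credentials
# oldest first: %Y = mtime in epoch seconds, %n = file name
find "$folderA" -maxdepth 1 -type f -print0 |
  xargs -r0 stat -c '%Y %n' | sort -n | cut -d' ' -f2- |
while IFS= read -r file; do
    # do the transfer (upload_one stands in for the ncftpput call) and check the success
    if upload_one "$file"; then
        mv "$file" "$folderB"/         # success: keep a copy in folderB
    else
        mv "$file" "$retrydir"/        # failure: queue it for the retry pass
    fi
done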
As you stated above, there are situations where newly stored files are not successfully transferred. This may happen if the file is still being written to after you start the transfer. So filter on the timestamp and require the file to be at least some time old. Add -mmin +1 to the find statement for "at least one minute old":
find . -maxdepth 1 -type f -mmin +1 -print0 | xargs -r0 stat -c %n | sort
If you don't want to rely on a one-minute file age, you'll have to check whether the file is still open, e.g. lsof | grep ./testfile, but this may have issues if you have tmpfs in your file system:
lsof | grep ./testfile
lsof: WARNING: can't stat() tmpfs file system /var/lib/docker/containers/8596cd310292a54652c7f50d7315c8390703b4816442146b340946779a72a40c/shm
Output information may be incomplete.
lsof: WARNING: can't stat() proc file system /run/docker/netns/fb9323486c44
Output information may be incomplete.
Alternatively, add %s to the stat format and check the file size twice within a few seconds; if it is constant, the file may have been written completely. "May", because the writing process could also just be stalled.
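A rough sketch of that size check, assuming a fixed short wait (the 2 seconds are an arbitrary choice, and the file name is just a hypothetical example):
file="$folderA/1.txt"
size1=$(stat -c %s "$file")
sleep 2
size2=$(stat -c %s "$file")
if [ "$size1" -eq "$size2" ]; then
    echo "size is stable, file is probably written completely"
else
    echo "size is still changing, skip the file for now"
fi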

Bash: Check if remote directory exists using FTP

I'm writing a bash script to send files from a Linux server to a remote Windows FTP server.
I would like to check, using FTP, whether the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH or SCP, and I cannot install new scripts on the Linux server. Also, for performance reasons, I would prefer that checking and creating the folders be done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
    local r
    local a
    r="$@"
    while [[ "$r" != "$a" ]]; do
        a=${r%%/*}
        echo "mkdir $a"
        echo "cd $a"
        r=${r#*/}
    done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (since I cannot do mkdir -p $FTP_PATH through FTP).
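For illustration, with FTP_PATH set to a hypothetical dir1/dir2/dir3, the loop emits the following commands, which the heredoc above then sends to the server one level at a time:
mkdir dir1
cd dir1
mkdir dir2
cd dir2
mkdir dir3
cd dir3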
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, you generate the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To let FTP test whether a directory exists, you use the fact that the "DIR" command has an option to write its output to a file.
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5   # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
    echo "mkdir $CURRENT_DIR"
fi
Please note that the "-s" test is not necessarily correct - I don't have access to ftp right now and don't know what the exact output of running DIR on a non-existing directory will be - it could be an empty file, or it could be a specific error. If it is an error, you can grep for the error text in $local_output_file.
Now, wrap step #2 in a loop over your individual subdirectories in a.sh.
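A rough sketch of such a loop (assuming $FTP_PATH holds the slash-separated target path and $local_output_file is a local scratch file, both hypothetical names here):
#!/bin/bash
# a.sh - for each component of the target path: check it, create it if missing, then enter it
rest="$FTP_PATH"
while [ -n "$rest" ]; do
    part=${rest%%/*}                            # next path component
    [ "$rest" = "$part" ] && rest="" || rest=${rest#*/}
    rm -f "$local_output_file"                  # clear the previous listing
    echo "DIR $part $local_output_file"         # remote listing written to a local file
    sleep 5                                     # give ftp time to execute the DIR
    if [ ! -s "$local_output_file" ]; then      # empty or missing listing: assume it does not exist
        echo "mkdir $part"
    fi
    echo "cd $part"
done
Because the script is piped into ftp, each echoed command is executed while the script keeps running, which is what makes the sleep-and-test approach possible at all.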
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}

Run wget and other commands in shell script

I'm trying to create a shell script that will download the latest Atomic gotroot rules to my server, unpack them, copy them to the correct folder, and so on.
I've been reading shell tutorials and forum posts for most of the day, and the syntax escapes me for some of these commands. I have run all of these commands manually and I know they work.
I know I need to develop some error checking, but I'm just trying to get the commands to run correctly. The main problem at the moment is the syntax of the wget commands; I've got errors about missing semi-colons, divide by zero, and unsupported schemes. I've tried various quoting (single and double) and escaping of - / " characters in various combinations.
Thanks for any help.
The raw wget command is
wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"
#!/bin/sh
update_modsec_rules(){
    wget=/usr/bin/wget
    tar=/bin/tar
    apachectl=/usr/bin/apache2ctl
    TXT="Script Run Finished"
    WORKING_DIR="/var/asl/updates"
    TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
    EXISTING_FILES="/var/asl/updates/modsec/*"
    EXISTING_ARCH="/var/asl/updates/modsec-*"
    WGET_OPTS='--user=jim --password=xxx-yyy-zzz'
    URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"

    # change to working directory and clean up any downloaded files and extracted rules in the modsec/ directory
    cd $WORKING_DIR
    rm -f $EXISTING_ARCH
    rm -f $EXISTING_FILES
    rm -f VERSION*

    # wget to download VERSION file
    $wget ${WGET_OPTS} "${URL_BASE}/VERSION"

    # get current MODSEC_VERSION from VERSION file and save as variable
    source VERSION
    TARGET_DATE=$MODSEC_VERSION
    echo $TARGET_DATE

    # wget to download current archive
    $wget ${WGET_OPTS} "${URL_BASE}/modsec-${TARGET_DATE}.tar.gz"

    # extract archive
    echo "extracting files . . . "
    tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
    echo "copying files . . . "
    cp -uv $EXISTING_FILES $TARGET_DIR
    echo $TXT
}
update_modsec_rules $@ 2>&1 | tee -a /var/asl/modsec_update.log

RESTART_APACHE="/usr/local/cpanel/scripts/restartsrv httpd"
$RESTART_APACHE
Here are some guidelines to use when writing shell scripts.
Always quote variables when you use them. This helps avoid the possibility of misinterpretation. (What if a filename contains a space?)
Don't trust fileglobbing on commands like rm. Use for loops instead. (What if a filename starts with a hyphen?)
Avoid subshells when possible. Your lines with backquotes make me itchy.
Don't exec if you can help it. And especially don't expect any parts of your script after your exec to actually get run.
I should point out that while your shell may be bash, you've specified /bin/sh for execution of this script, so it is NOT a bash script.
Here's a rewrite with some error checking. Add salt to taste.
#!/bin/sh

# Linux
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/sbin/apache2ctl

# FreeBSD
#wget=/usr/local/bin/wget
#tar=/usr/bin/tar
#apachectl=/usr/local/sbin/apachectl

TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES_DIR="/var/asl/updates/modsec/"
EXISTING_ARCH_DIR="/var/asl/updates/"
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
WGET_OPTS='--user="jim" --password="xxx-yyy-zzz"'

if [ ! -x "$wget" ]; then
    echo "ERROR: No wget." >&2
    exit 1
elif [ ! -x "$apachectl" ]; then
    echo "ERROR: No apachectl." >&2
    exit 1
elif [ ! -x "$tar" ]; then
    echo "ERROR: Not in Kansas anymore, Toto." >&2
    exit 1
fi

# change to working directory and cleanup any downloaded files
# and extracted rules in modsec/ directory
if ! cd "$WORKING_DIR"; then
    echo "ERROR: can't access working directory ($WORKING_DIR)" >&2
    exit 1
fi

# Delete each file in a loop.
for file in "$EXISTING_FILES_DIR"/* "$EXISTING_ARCH_DIR"/modsec-*; do
    rm -f "$file"
done

# Move old VERSION out of the way.
mv VERSION VERSION-$$

# wget1 to download VERSION file (replaces WGET1)
if ! $wget $WGET_OPTS "${URL_BASE}/VERSION"; then
    echo "ERROR: can't get VERSION" >&2
    mv VERSION-$$ VERSION
    exit 1
fi

# get current MODSEC_VERSION from VERSION file and save as variable,
# but DON'T blindly trust and run scripts from an external source.
if grep -q '^MODSEC_VERSION=' VERSION; then
    TARGET_DATE="`sed -ne '/^MODSEC_VERSION=/{s/^[^=]*=//p;q;}' VERSION`"
    echo "Target date: $TARGET_DATE"
fi

# Download current archive (replaces WGET2)
if ! $wget ${WGET_OPTS} "${URL_BASE}/modsec-$TARGET_DATE.tar.gz"; then
    echo "ERROR: can't get archive" >&2
    mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
    exit 1
fi

# extract archive
if [ ! -f "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz" ]; then
    echo "ERROR: I'm confused, where's my archive?" >&2
    mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
    exit 1
fi
tar zxvf "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz"

for file in "$EXISTING_FILES_DIR"/*; do
    cp "$file" "$TARGET_DIR/"
done

# So far so good, so let's restart apache.
if $apachectl configtest; then
    if $apachectl restart; then
        # Success!
        rm -f VERSION-$$
        echo "$TXT"
    else
        echo "ERROR: PANIC! Apache didn't restart. Notify the authorities!" >&2
        exit 3
    fi
else
    echo "ERROR: Apache configs are broken. We're still running, but you'd better fix this ASAP." >&2
    exit 2
fi
Note that while I've rewritten this to be more sensible, there is certainly still a lot of room for improvement.
You have two options:
1- Change this to
WGET1=' --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
then run
wget $WGET1
and do the same for WGET2.
Or
2- Encapsulate $WGET1 in backquotes ``, e.g.:
`$WGET1`
This applies to any command you're executing out of a variable.
Suggested changes:
#!/bin/sh
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET1='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
WGET2='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/modsec-$TARGET_DATE.tar.gz"'
## change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
## wget1 to download VERSION file
`$WGET1`
## get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=`echo $MODSEC_VERSION`
## WGET2 command to download current archive
`$WGET2`
## extract archive
tar zxvf $WORKING_DIR/modsec-$TARGET_DATE.tar.gz
cp $EXISTING_FILES $TARGET_DIR
## restart server
exec '/usr/local/cpanel/scripts/restartsrv_httpd' $*;
Pro Tip: If you need string substitution, using ${VAR} is much better to eliminate ambiguity, e.g.:
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz

Deleting Inactive 5 days Old Files

My aim is to delete files more than 5 days old which are no longer used by any process.
As a starter I have written the following script, but it does not work; it says "line 10: command not found".
HOME=~/var
cd $HOME
for f in `find . -type f`; do
    if [`lsof -n $f`]; then
        echo $f
    fi
done
Hmm, yes, it's not being done properly. Try this:
#!/bin/bash
DIR=$HOME/var
##########################################################
## files older than 5 days and recursive value set to 1
# for f in $(find $DIR -mtime +5 -maxdepth 1 -type f); do
##########################################################
for f in $(find $DIR -type f); do
    # run lsof and look for pattern a-z, send it to dev null
    lsof -n $f | grep [a-z] > /dev/null
    # If it was found the exit status will be 0 or success
    if [ $? = 0 ]; then
        echo "$f in use -->"
    else
        echo "File $f not in use"
    fi
done
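To get all the way to the original goal (delete files older than 5 days that no process has open), the same loop could be extended roughly like this; it is only a sketch, so dry-run it with the rm commented out first:
#!/bin/bash
DIR=$HOME/var
for f in $(find "$DIR" -maxdepth 1 -type f -mtime +5); do
    # lsof exits 0 if some process has the file open
    if lsof -n "$f" > /dev/null 2>&1; then
        echo "$f is still in use, skipping"
    else
        echo "deleting $f"
        rm -f "$f"       # comment this line out for a dry run
    fi
done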
In your script you had defined HOME as ~/var. I would try to stay away from using the tilde (~) in scripts. Secondly, you were changing an environment variable's value within your script. Try this from your command line:
env|grep HOME
This new method is a lot cleaner
Now here is another pointer that may mean you need to make further changes...
Will your script be running as a cron job? Will it run as a cron entry for this existing user? If it is set to run as root, then the above will fail. I will show you how:
echo $HOME
/home/myuser
sudo -i
echo $HOME
/root
Notice how the ~ or $HOME value for home has changed. So if you do decide to run it as a cron entry for another user, then try this:
scriptuser="your_user"
getent passwd $scriptuser|awk -F":" '{print $6}'
If it is the current user that does sudo su - or sudo -i and then executes the script, then try:
getent passwd $(logname)|awk -F":" '{print $6}'
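Putting that together, the top of the script could derive the directory from the target user's real home instead of ~ (scriptuser is a hypothetical name, as above):
scriptuser="your_user"
userhome=$(getent passwd "$scriptuser" | awk -F":" '{print $6}')
DIR="$userhome/var"
cd "$DIR" || exit 1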
