shell script runs manually, but fails on crontab - shell

I have written a shell script to copy a file from a remote server and convert it to another format. After the conversion, I edit the file using the sed command. The script runs successfully when executed manually but fails when executed through crontab.
Crontab entry is:
*/1 * * * * /script/testshell.sh
Below is the shell script code:
#!/bin/bash
file="/script/test_data.csv" if [ -f "$file" ] then
echo " file is present in the local machine " else
echo " file is not present in the local machine "
echo " checking the file is present in the remote server "
ssh user#IP 'if [ -f /$path ]; then echo File found ; else echo File not found; fi' fi
if [ -f "$file"] then
rm -f test_data.csv fi
scp -i /server.pem user#IP:/$path
file="/script/test_data.csv" if [ -f "$file" ] then
echo "$file found." else
echo "$file not found." fi
if [ -f "$file" ] then echo " converting csv to json format ....." fi
./csvjson.sh input.csv output.json
sed -e '/^$/d' -e 's/.*/&,/; 1 i\[' ./output.json | sed ' $ a \]'
hello.json
When I run the script manually, it works perfectly, but it does not work from crontab.

What exactly doesn't work from cron? What's the output, and are there any errors from cron?
A few things to try:
cron doesn't run your profile by default, so if your script needs anything set from it, include it in the crontab command (see the sketch after this list), e.g.
". ./.bash_profile;/script/testshell.sh"
I can't see $path set anywhere, although it is used to test whether a file exists. Are you setting it manually somewhere, so that it won't be set when run from cron?
Some of your scripts and files are specified as being in the current directory (./); from cron that will be your home folder. Is that right, or do you need to change directory in the script or use full paths for them?
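Putting the first and third points together, a minimal sketch of a cron-friendly setup (the log path is just an assumption for capturing cron's output):
*/1 * * * * . "$HOME/.bash_profile"; /script/testshell.sh >> /tmp/testshell.cron.log 2>&1
and at the top of testshell.sh:
#!/bin/bash
# run from the script's own directory so ./csvjson.sh and ./output.json resolve
cd /script || exit 1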
Hope that helps

Related

cron not able to run the commands in shell

I am trying to run the following cron job from bash (RHEL 7.4). It is an entry-level Postgres DB backup script I wrote:
#!/bin/bash
# find latest file
echo $PATH
cd /home/postgres/log/
echo "------------ backup starts-----------"
latest_file=$( ls -t | head -n 1 | grep '\.log$' )
echo "latest file"
echo $latest_file
# find older files than above
echo "old file"
old_file=$( find . -maxdepth 1 -name "postgresql*" ! -newer $latest_file -mmin +1 )
if [ -f "$old_file" ]
then
echo $old_file
file_name=${old_file##*/}
echo "file name"
echo $file_name
# zip older file
tar czvf /home/postgres/log/archived_logs/$old_file.gz /home/postgres/log/$file_name
rm -rf /home/postgres/log/$file_name
else
echo "no old file found"
fi
The above runs correctly from the shell and performs the intended tasks. It also echoes the needed info.
I have installed it for the postgres user (not root) with crontab -e:
*/2 * * * * /home/postgres/log/rollup.sh >> /home/postgres/log/logfile.csv 2>&1
It correctly echoes the text I embedded for testing, but the output of the commands does not make it into the .csv. That is not my main concern, though; my concern is that it is not running those few commands at all.
I gave it another try by changing the log file (.csv) path to /dev/null, and then the commands in the shell script do execute. I don't see what I am missing here.
The .csv file has 777 permissions, just for testing.
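The script already echoes $PATH, which is a useful value to compare: under cron it is typically just /usr/bin:/bin. One way to rule out environment differences (a sketch, not from the original post) is to pin the environment down at the top of rollup.sh:
#!/bin/bash
# cron starts with a minimal environment; do not rely on the login profile
export PATH=/usr/local/bin:/usr/bin:/bin
cd /home/postgres/log/ || exit 1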

bad interpreter no such file or directory

Thank you for reviewing my shell script. It produces this error and I am not sure why it is not running. I am a noob when it comes to shell scripting. Please help. Here is my code:
#!/bin/bash
#This script creates the log files based on the current date and hour
#Variables for managing the logs
LOG_DIRECTORY=/var/log; export LOG_DIRECTORY
LOG_DIRECTORY_FILE=/var/log/secure; export LOG_DIRECTORY_FILE
MY_LOG_DIRECTORY=$LOG_DIRECTORY/mylogs; export MY_LOG_DIRECTORY
MY_LOG_FILE=$MY_LOG_DIRECTORY/mylog-`date +%m-%d-%H`; export MY_LOG_FILE
EXPRESSTION=`date '+%b %d %H'`; export EXPRESSTION
#Checks if mylog directory exists. If not, it creates it
if [ ! -d "$MY_LOG_DIRECTORY" ]; then
mkdir -p $MY_LOG_DIRECTORY
fi
#Script exits successfully if the log already exists
if [ -f "$MY_LOG_FILE" ]; then
echo "Log file already exists. Nothing is written to log";
exit 0;
fi
#grep the contents to the log file
grep "^$EXPRESSTION" $LOG_DIRECTORY_FILE >> $MY_LOG_FILE
echo "New myLog file created successfully"
Your script needs to be saved as a UNIX text file.
Try running dos2unix on it, or open it up in vim and run :set fileformat=unix and save.
If you don't have dos2unix, and aren't comfortable with vim, you can use perl:
perl -pi -e 's/\r\n?/\n/g' your-script-filename
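If you want to confirm that Windows line endings are actually the cause before converting, one quick check (using the same placeholder filename) is:
file your-script-filename                   # reports "with CRLF line terminators" for DOS-style files
head -n 1 your-script-filename | cat -v     # a trailing ^M after the shebang betrays a carriage return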
"bad interpreter no such file or directory" error indicates /bin/bash does not exist.
Try to run script as
$ sh log.sh
OR use some other available shell interpreter (e.g /bin/sh , /bin/ksh) instead of /bin/bash

Bash: Check if remote directory exists using FTP

I'm writing a bash script to send files from a linux server to a remote Windows FTP server.
I would like to check using FTP if the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH or SCP, and I cannot install new scripts on the Linux server. Also, for performance reasons, I would prefer that checking and creating the folders be done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
local r
local a
r="$#"
while [[ "$r" != "$a" ]]; do
a=${r%%/*}
echo "mkdir $a"
echo "cd $a"
r=${r#*/}
done
}
The ftp_mkdir_loop function helps create all the folders in $FTP_PATH (since I cannot do mkdir -p $FTP_PATH through FTP).
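For illustration, with a hypothetical three-level path the function emits one mkdir/cd pair per component, which $(ftp_mkdir_loop "$FTP_PATH") then splices into the FTP session:
$ ftp_mkdir_loop "dir1/dir2/dir3"
mkdir dir1
cd dir1
mkdir dir2
cd dir2
mkdir dir3
cd dir3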
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you use only one FTP connection, create the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To let FTP test whether a directory exists, use the fact that the "DIR" command can write its output to a local file.
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5 # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
echo "mkdir $CURRENT_DIR"
fi
Please note that "-s" test is not necessarily correct - I don't have acccess to ftp now and don't know what the exact output of running DIR on non-existing directory will be - cold be empty file, could be a specific error. If error, you can grep the error text in $local_output_file
Now wrap step #2 in a loop over your individual subdirectories in a.sh; a sketch follows.
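A rough sketch of what that a.sh could look like (splitting $FTP_PATH into components and the output file name are my assumptions, not part of the original answer):
#!/bin/bash
# Emits FTP commands on stdout; pipe this into ftp as shown above.
FTP_PATH="dir1/dir2/dir3"              # hypothetical target path
local_output_file=/tmp/ftp_dir_check   # DIR listing lands here locally

rm -f "$local_output_file"
IFS='/' read -r -a parts <<< "$FTP_PATH"
for CURRENT_DIR in "${parts[@]}"; do
    echo "DIR $CURRENT_DIR $local_output_file"
    sleep 5                            # leave time for ftp (reading our stdout) to write the listing
    if [ ! -s "$local_output_file" ]; then
        echo "mkdir $CURRENT_DIR"
    fi
    echo "cd $CURRENT_DIR"
    rm -f "$local_output_file"
done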
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}

How to change parameter in a file, only if the file exists and the parameter is not already set?

#!/bin/bash
# See if registry is set to expire updates
filename=hostnames
> test.log
PARAMETER=Updates
FILE=/etc/.properties
CODE=sudo if [ ! -f $FILE] && grep $PARAMETER $FILE; then echo "File found, parameter not found."
#CODE=grep $PARAMETER $FILE || sudo tee -a /etc/.properties <<< $PARAMETER
while read -r -a line
do
hostname=${line//\"}
echo $hostname":" >> test.log
#ssh -n -t -t $hostname "$CODE" >> test.log
echo $CODE;
done < "$filename"
exit
I want to set "Updates 30" in /etc/.properties on about 50 servers if:
The file exists (not all servers have the software installed)
The parameter "Updates" is not already set in the file (e.g. in case of multiple runs)
I am a little puzzled about how to do this, because I am not sure it can be done in one line of bash code. The rest of the script works fine.
OK, here's what I think would be a solution for you, as explained in this article: http://www.unix.com/shell-programming-scripting/181221-bash-script-execute-command-remote-servers-using-ssh.html
Invoke a script that contains the commands you want executed on the remote server.
Code script 1:
while read -r -a line
do
ssh ${line} "bash -s" < script2
done < "$filename"
To replace a line in a text file, you can use sed (http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/)
Code script 2:
PARAMETER=Updates
FILE=/etc/.properties
NEWPARAMETER="Updates 30" ### (what you want to write there)
if [ ! -f "$FILE" ] && grep -q "$PARAMETER" "$FILE"; then exit; fi
sed -i "s/$PARAMETER/$NEWPARAMETER/g" "$FILE"
I'm not certain this covers your whole use case, but I hope it helps you out; if there is anything else, feel free to ask!
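For the specific requirement in the question (append "Updates 30" only when /etc/.properties exists and no "Updates" line is present yet), a compact variant in the spirit of the commented-out tee attempt from the question could be:
# safe to run repeatedly: the grep guard keeps it from appending twice
[ -f /etc/.properties ] && ! grep -q '^Updates' /etc/.properties && echo 'Updates 30' | sudo tee -a /etc/.properties > /dev/null
Because of the short-circuiting &&, nothing happens on servers where the file is missing or the parameter is already set.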

.SH/.COMMAND Bash File Mac OSX Not working

I keep getting this error for my script below when I try to run it:
The file “test.command” could not be executed because you do not have appropriate access privileges.
I tried changing it from read-only to read/write. That didn't work.
for domain in $(pwgen -1A0B 6 10000); do echo -ne "$domain.com "; if [ -z "$(whois $domain.com | grep -o 'No match for')" ]; then echo -ne "Not "; fi; echo "Available for register"; done >> domains.txt
Any ideas?
You need to give it execute permission as well before you can run it:
chmod u+x file
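Applied to the file from the question (assuming you are in the directory that contains it):
chmod u+x test.command   # add execute permission for your user
./test.command           # run it; double-clicking it in Finder should now work as well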
