Crontab launches the script and the podcast updates correctly, but none of the file renaming (first loop) or file moving (second loop) happens.
If I run the script from the command line, it works perfectly.
I have added the "echo" lines to troubleshoot; the $file variable is consistent when run from the command line and from crontab.
#/bin/sh
# Mad Money updates at 6:40 pm (timezone?) M-F
# At 6:30 pm CST it was ready to download
# http://podcast.cnbc.com/mmpodcast/lightninground.xml
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
echo "paths"
echo $PATH
podcast_folder=$"/home/zenon/podcasts/MAD_MONEY_W__JIM_CRAMER_-_Full_Episode"
episode_folder=$"/mnt/black-2tb-001/plex-server/shows/Mad-Money/Season-1"
hpodder update
sleep 1
hpodder download
sleep 1
cd ${podcast_folder}
for file in "$podcast_folder"/*.mp4; do
echo "Processing ${file}"
#"MadMoney-" Name
name=${file:60:9}
echo "podcast name is ${name}"
#"04" Month
month=${file:69:2}
echo "month is ${month}"
#"18" Day
day=${file:71:2}
echo "day is ${day}"
#"13" yr
yr=${file:73:2}
echo "year is 20${yr}"
title="${name}20${yr}.${month}.${day}.mp4"
echo "file ${file}"
echo "title ${title}"
# cp ${file} ${title}
mv ${file} ${title}
done
cd ${podcast_folder}
for file in "$podcast_folder"/*.mp4; do
chown zenon:plex ${file}
mv ${file} ${episode_folder}
done
# deletes any files older than 9 days
find ${episode_folder} -type f -mtime +9 -exec rm {} \;
exit
Here is the debugging output from the script:
cat cron.log
paths
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
1 podcast(s) to consider
Get: 4 MAD MONEY W/ JIM CRAMER - Full Episode
100% 1 B/s 0s
0 episode(s) to consider from 1 podcast(s)
0% 0 B/s 0s
Processing $/home/zenon/podcasts/MAD_MONEY_W__JIM_CRAMER_-_Full_Episode/*.mp4
You have a mistake in the first line: you should change
#/bin/sh to #!/bin/sh
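Worth noting: the cron.log line Processing $/home/zenon/... points at a second issue. $"..." is bash locale quoting; plain sh keeps the $ as a literal character, so $podcast_folder never names a real directory and the glob never matches. The ${file:60:9} substring expansions are bashisms too, so pointing the shebang at bash is the safer fix. A minimal corrected sketch of those parts (substring offsets kept from the original):

#!/bin/bash
podcast_folder="/home/zenon/podcasts/MAD_MONEY_W__JIM_CRAMER_-_Full_Episode"
episode_folder="/mnt/black-2tb-001/plex-server/shows/Mad-Money/Season-1"

for file in "$podcast_folder"/*.mp4; do
    [ -e "$file" ] || continue   # skip the literal pattern when nothing matches
    name=${file:60:9}            # "MadMoney-"
    month=${file:69:2}
    day=${file:71:2}
    yr=${file:73:2}
    mv "$file" "${name}20${yr}.${month}.${day}.mp4"
done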
Related
I need to run 2 loops in cron which do their work every 5 minutes
*/5 * * * *
Loop 1 runs from cron every 5 minutes and checks whether the file has been uploaded or not ($yesterday means a file named with yesterday's date).
The first cron loop works fine for me; the second cron loop I am not able to resolve. The second loop has 3 conditions:
1. It should run when it finds $yesterday.zip.
2. It should run only once after $yesterday.zip appears (because cron fires every 5 minutes once $yesterday.zip is found).
3. It should not run from 00:00 until $yesterday.zip has been downloaded.
(The $yesterday file has no fixed download time, so I run cron every 5 minutes.)
I made the script below (posting it so you guys don't think I made no effort and just asked for sample code; I just need an if statement for cron that covers these 3 conditions):
FILE=/fullpath/$yesterday.zip
if test -f "$FILE"; then
touch /fullpath/loop2.txt # flag file for loop 2
echo "I am the best"
else
cd /fullpath/
wget -r -np -nH "url/$yesterday.zip" # it should be a 20+ MB file
find . -name "*.zip" -type 'f' -size -160k -delete # delete it if some garbage was downloaded
rm -rf /fullpath/loop2.txt # delete the flag file so loop 2 stops running every 5 minutes
fi
FILE2=/fullpath/loop2.txt
if test -f "$FILE2"; then
echo -e "Script will work only once" | mailx -v -s "Script will work only once" myemail#gmail.com
else
echo "full script work"
touch /fullpath/loop2.txt
fi
You guys can ignore my code above; just let me know an if statement for a loop with these 3 conditions.
I would use something like this:
if lockfile -r0 /tmp/lockfile_$(date +%F); then #only run if there's no lockfile for the day
cd /fullpath/
# while we don't have a correct file (absent or truncated)
while [[ ! -f "$yesterday.zip" ]] || [[ $(stat -c %s "$yesterday.zip") -lt 20971520 ]]; do
wget -r -np -nH "url/$yesterday.zip" # try to download it
if [ $? -ne 0 ]; then # if the file isn't available yet
rm /tmp/lockfile_$(date +%F) # delete the lock in order to attempt the download again in 5"
fi
done
fi
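A crontab entry to drive this every 5 minutes could look like the following (the script and log paths are placeholder assumptions). Note that the lockfile utility ships with procmail, so it may need to be installed first:

*/5 * * * * /fullpath/check_yesterday.sh >> /fullpath/check_yesterday.log 2>&1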
I am trying to run the following cron job from bash (RHEL 7.4). It is an entry-level Postgres DB backup script I wrote:
#!/bin/bash
# find latest file
echo $PATH
cd /home/postgres/log/
echo "------------ backup starts-----------"
latest_file=$( ls -t | head -n 1 | grep '\.log$' )
echo "latest file"
echo $latest_file
# find older files than above
echo "old file"
old_file=$( find . -maxdepth 1 -name "postgresql*" ! -newer $latest_file -mmin +1 )
if [ -f "$old_file" ]
then
echo $old_file
file_name=${old_file##*/}
echo "file name"
echo $file_name
# zip older file
tar czvf /home/postgres/log/archived_logs/$old_file.gz /home/postgres/log/$file_name
rm -rf /home/postgres/log/$file_name
else
echo "no old file found"
fi
The above runs correctly from the shell, performs the intended tasks, and echoes the needed info.
I have installed it as the postgres user (not root) with crontab -e:
*/2 * * * * /home/postgres/log/rollup.sh >> /home/postgres/log/logfile.csv 2>&1
It correctly echoes the text I embedded for testing, but not the commands' output, into the .csv. That is not really my concern, though; my concern is that it is not running those few commands at all.
I gave it another try by changing the log file (.csv) path to /dev/null, and then the commands in the shell script do execute. I am not getting what I am missing here.
The .csv file has 777 permissions, just for testing.
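One standard way to debug shell-versus-cron differences is to capture the environment cron actually runs with and compare it with the interactive one; a minimal sketch (the temp-file path is an arbitrary choice):

* * * * * env > /tmp/cron_env.txt 2>&1

Then, from the interactive shell:

diff <(sort /tmp/cron_env.txt) <(env | sort)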
This is my first day scripting. I use Linux, but I needed a script and have been racking my brain until I finally asked for help. I need to check a directory that already has directories present, to see whether any new, unexpected directories have been added.
OK, I think I have got this as simple as possible. The below works but displays all the files in the directory as well. I will keep working at it unless someone can tell me how not to list the files too. I tried ls -d, but then it takes the echo "nothing new" branch. I feel like an idiot and should have gotten this sooner.
#!/bin/bash
workingdirs=`ls ~/ | grep -viE "temp1|temp2|temp3"`
if [ -d "$workingdirs" ]
then
echo "nothing new"
else
echo "The following Direcetories are now present"
echo ""
echo "$workingdirs"
fi
If you want to take some action when a new directory is created, use inotifywait. If you just want to check that the directories that exist are the ones you expect, you could do something like:
TMPDIR=${TMPDIR:-/tmp} # fall back to /tmp when TMPDIR is unset
trap 'rm -f $TMPDIR/manifest' 0
# Create the expected values. Really, you should hand edit
# the manifest, but this is just for demonstration.
find "$Workingdir" -maxdepth 1 -type d > $TMPDIR/manifest
while true; do
sleep 60 # Check every 60 seconds. Modify period as needed, or
# (recommended) use inotifywait
if ! find "$Workingdir" -maxdepth 1 -type d | cmp - $TMPDIR/manifest; then
: Unexpected directories exist or have been removed
fi
done
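Since inotifywait is the recommended route, here is a minimal sketch of that variant, assuming the inotify-tools package is installed and $Workingdir is set as above:

#!/bin/sh
# react to creations as they happen instead of polling every 60 seconds
inotifywait -m -e create "$Workingdir" | while read -r watched event name; do
    if [ -d "$Workingdir/$name" ]; then
        echo "new directory appeared: $Workingdir/$name"
    fi
done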
The shell script below will show whether each directory is present or not.
#!/bin/bash
Workingdir=/root/working/
knowndir1=/root/working/temp1
knowndir2=/root/working/temp2
knowndir3=/root/working/temp3
my=/home/learning/perl
arr=($Workingdir $knowndir1 $knowndir2 $knowndir3 $my) # create an array
for i in "${arr[@]}" # check each element of the array
do
if [ -d "$i" ]
then
echo "directory $i present"
else
echo "directory $i not present"
fi
done
output:
directory /root/working/ not present
directory /root/working/temp1 not present
directory /root/working/temp2 not present
directory /root/working/temp3 not present
directory /home/learning/perl present
This will save the list of existing directories to a file. When you run the script a second time, it will report directories that have been deleted or added.
#!/bin/sh
dirlist="$HOME/dirlist" # dir list file for saving state between runs
topdir='/some/path' # the directory you want to keep track of
tmpfile=$(mktemp)
find "$topdir" -type d -print | sort -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
echo 'Directories added:'
comm -1 -3 "$dirlist" "$tmpfile"
echo 'Directories removed:'
comm -2 -3 "$dirlist" "$tmpfile"
else
echo 'No changes'
fi
mv "$tmpfile" "$dirlist"
The script will have problems with directories that have very exotic names (containing newlines).
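If that matters, GNU tools can carry NUL-separated names end to end (find -print0, sort -z, and comm -z, the latter needing coreutils 8.25 or newer); a sketch under those assumptions:

find "$topdir" -type d -print0 | sort -z -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
    echo 'Directories added:'
    comm -z -1 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
    echo 'Directories removed:'
    comm -z -2 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
else
    echo 'No changes'
fi
mv "$tmpfile" "$dirlist"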
I have a bash script that reads 10 lines from a file; each line has the name of a server that I need to boot up. The boot process takes long and outputs a lot of information, and I need to send a "Ctrl-C" to the script so it can proceed to the next server; otherwise the loop spends a long time on each server during the boot process.
Here is the actual output screen during the boot process. It takes more than 30 minutes to complete on each server; the loop could continue if Ctrl-C were sent to the script/automation:
$ eb restore environment_id
INFO: restoreEnvironment is starting.
-- Events -- (safe to Ctrl+C) Use "eb abort" to cancel the command.
2017-01-01 12:00 restoreEnvironment is starting
2017-01-01 12:15 Environment health has transitioned to Pending. Initialization in progress (running for 28 seconds). There are no instance
2017-01-01 12:20 Created security group named: sg-123123123
2017-01-01 12:22 Created load balancer named: awseb-e-3-qweasd2-DSLFLSFJHLS
2017-01-01 12:24 Created security group named: sg-123123124
2017-01-01 12:26 Created Auto Scaling launch configuration named: awseb-e-DSLFLSFJHLS-stack-AWSEBAutoScalingLaunchConfiguration-DSLFLSFJHLS
2017-01-01 12:28 Added instance [i-01askjdkasjd123] to your environment.
2017-01-01 12:29 Created Auto Scaling group named: awseb-e-DSLFLSFJHLS-stack-AWSEBAutoScalingLaunchConfiguration-DSLFLSFJHLS
2017-01-01 12:30 Waiting for EC2 instances to launch. This may take a 30 minutes
2017-01-01 13:15 Successfully launched environment: pogi-server
Here is my working script:
#!/bin/bash
DIR=/jenkins/workspace/restore-all
INSTANCE_IDS=/jenkins/workspace/environment-ids
EB_FILE=$DIR/server.txt
echo "PROCEEDING TO WORK DIRECTORY"
cd $DIR ; pwd
echo""
echo "CREATING A CODE FOLDER"
mkdir $DIR/code ; pwd ; ls -ltrh $DIR/code
echo""
for APP in `cat $EB_FILE | awk '{print $NF}' | grep -v 0`
do
echo "#########################################"
echo "RESTORING = "$APP
echo ""
echo "COPYING BEANSTALKFILES"
mkdir $DIR/code/$APP ; cd $DIR/code/$APP
cp -pr $DIR/beasntalk/$APP/dev/.e* $DIR/code/$APP
echo ""
echo ""
echo "TRIGGERING EB RESTORE"
cd $DIR/code/$APP
eb restore `tail -1 $INSTANCE_IDS/$APP-dev.txt`
echo ""
echo "REMOVE CODE FOLDER"
cd $DIR/code ; rm -rf $DIR/code/*
echo""
done
echo "REMOVE WORKSPACE FOLDER"
rm -rf $DIR/*
echo""
Have you tried putting the eb restore command in the background with &?
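For example, a minimal version of that idea inside the existing loop could look like this (the log file name is an arbitrary choice):

# start the restore without blocking the loop, capturing its output
eb restore "$(tail -1 $INSTANCE_IDS/$APP-dev.txt)" > "$DIR/$APP-restore.log" 2>&1 &

# ...and after the loop, block until every background restore has finished
wait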
I took a stab at rewriting what you posted. I don't know if it will work due to a few uncertainties (i.e. what do the contents of server.txt look like, do you copy it into the restore-all directory every time, do you put beanstalk/*/dev/.e* into restore-all each time, etc.), but here it is:
#!/bin/bash
restore_all_dir=/jenkins/workspace/restore-all # hyphens are not valid in shell variable names, so use underscores
InstanceIDs=/jenkins/workspace/environment-ids
EBFile=${restore_all_dir}/server.txt
echo -ne " PROCEEDING TO ${restore_all_dir} ..."
cd ${restore_all_dir}
echo -e "\b\b\b - done\n"
pwd
echo -ne " CREATING FOLDER ${restore_all_dir}/code ..."
mkdir ${restore_all_dir}/code
echo -e "\b\b\b - done\n"
pwd
ls -ltrh ${restore_all_dir}/code
for i in `awk '!/0/ {print $NF}' ${EBFile}`; do
echo "#########################################"
echo -e "#### RESTORING: ${i}\n"
echo -en " COPYING BEANSTALKFILES ..."
mkdir ${restore_all_dir}/code/${i}
cd ${restore_all_dir}/code/${i}
cp -pr ${restore_all_dir}/beanstalk/${i}/dev/.e* ${restore_all_dir}/code/${i}
echo -e "\b\b\b - done\n"
echo -en " TRIGGERING EB RESTORE ..."
cd ${restore_all_dir}/code/${i}
eb restore `tail -1 ${InstanceIDs}/${i}-dev.txt` > ${restore_all_dir}/${i}_output.out 2>&1 & ## directs output to a file and puts the process into the background
echo -e "\b\b\b - put into background\n output saved in ${restore_all_dir}/${i}_output.out\n"
## Is this necessary in every iteration, or can it be excluded and just removed with the final rm -rf command?
## echo -en " REMOVE CODE FOLDER ..."
## cd ${restore_all_dir}/code
## rm -rf ${restore_all_dir}/code/*
## echo -e "\b\b\b - done\n"
done
## this will print "working..." until all background eb restore jobs are no longer "Running"
echo -en "\n working..."
while `jobs | awk '/Running/ && /eb restore/ {count++} END{if (count>=1) print "true"; else print "false"}'`; do
sleep 5 # avoid busy-looping while waiting
echo -en "\b\b\b\b\b\b\b\b\b\b\b working..."
done
echo -e "\b\b\b\b\b\b\b\b\b\b\b - done \n"
## this will also remove the output files unless they are put somewhere else
echo -en " REMOVE WORKSPACE FOLDER ..."
rm -rf ${restore_all_dir}/*
echo -e "\b\b\b - done\n"
My aim is to delete files more than 5 days old that are no longer used by any process.
As a starter I have written the following script, but it does not work; it says line 10: command not found.
HOME=~/var
cd $HOME
for f in `find . -type f`; do
if [`lsof -n $f`]; then
echo $f
fi
done
Hmm, yes, it's not being done properly. Try this:
#!/bin/bash
DIR=$HOME/var
##########################################################
## files older than 5 days and recursive value set to 1
# for f in $(find $DIR -mtime +5 -maxdepth 1 -type f); do
##########################################################
for f in $(find $DIR -type f); do
# run lsof and look for the pattern a-z, sending output to /dev/null
lsof -n $f | grep '[a-z]' > /dev/null
# If a match was found, the exit status will be 0 (success)
if [ $? -eq 0 ]; then
echo "$f in use -->"
else
echo "File $f not in use"
fi
done
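To tie this back to the original aim (delete files more than 5 days old that no process has open), a combined sketch using the commented find options from above; lsof exits non-zero when nothing has the file open:

#!/bin/bash
DIR=$HOME/var
# top-level files older than 5 days
for f in $(find "$DIR" -maxdepth 1 -mtime +5 -type f); do
    if lsof -n "$f" > /dev/null 2>&1; then
        echo "$f is still in use --> keeping it"
    else
        echo "deleting $f"
        rm -f "$f"
    fi
done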
In your script you had defined HOME as ~/var. I would stay away from using the tilde (~) in scripts. Secondly, you were changing an environment variable's value within your script. Try this from your command line:
env | grep HOME
The new method is a lot cleaner.
Now here is another pointer that may mean you need to make further changes...
Will your script be running in a cron job? Will it run as a cron entry for this existing user? If it is set to run as root, then the above will fail. I will show you how:
echo $HOME
/home/myuser
sudo -i
echo $HOME
/root
Notice how the ~ or $HOME value has changed. So if you do decide to run it as a cron entry for another user, then try this:
scriptuser="your_user"
getent passwd $scriptuser|awk -F":" '{print $6}'
If it is the current user that does sudo su - or sudo -i and then executes the script, try:
getent passwd $(logname)|awk -F":" '{print $6}'
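For example, after sudo -i the shell's $HOME is /root, but logname still reports the original login user, so (using the same example user as the transcript above):

getent passwd $(logname) | awk -F":" '{print $6}'
/home/myuser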